
    Mitigating airport congestion: market mechanisms and airline response models

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2009. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (leaves 157-165).
    Efficient allocation of scarce resources in networks is an important problem worldwide. In this thesis, we focus on resource allocation problems in a network of congested airports. The increasing demand for access to the world's major commercial airports, combined with the limited operational capacity at many of these airports, has led to growing air traffic congestion that causes several billion dollars of delay cost every year. We study two classes of demand-management techniques, strategic and operational, to mitigate airport congestion.
    As a strategic initiative, auctions have been proposed to allocate runway slot capacity. We focus on two elements in the design of such slot auctions: airline valuations and activity rules. One aspect of airport slot market environments that we argue must be considered in auction design is that the participating airlines are budget-constrained.
    The problem of finding the best bundle of slots on which to bid in an iterative combinatorial auction, also called the preference elicitation problem, is particularly hard, even more so for airlines in a slot auction. We propose a valuation model, the Aggregated Integrated Airline Scheduling and Fleet Assignment Model, to help airlines understand the true value of the different bundles of slots in the auction. This model is efficient and was found to be robust to data uncertainty in our experimental simulations.
    Activity rules are checks made by the auctioneer at the end of every round to suppress strategic behavior by bidders and to promote consistent, continual preference elicitation. These rules find applications in several real-world settings, including slot auctions. We show that the commonly used activity rules are not applicable to slot auctions because they prevent straightforward behavior by budget-constrained bidders. We propose the notion of a strong activity rule, which characterizes straightforward bidding strategies, and we show how a strong activity rule for budget-constrained bidders (and quasilinear bidders) can be expressed as a linear feasibility problem. This work on activity rules also applies to more general iterative combinatorial auctions.
    We also study operational (real-time) demand-management initiatives that are used when capacity at airports drops suddenly due to uncertainties such as bad weather. We propose a system design that integrates the capacity allocation, airline recovery, and inter-airline slot exchange procedures, and we suggest metrics to evaluate the different approaches to fair allocation.
    by Pavithra Harsha. Ph.D.
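
    The abstract's claim that a strong activity rule can be expressed as a linear feasibility problem can be illustrated with a toy revealed-preference check. The sketch below is my own simplification, not the thesis's formulation: it assumes a quasilinear bidder, a handful of named bundles, and per-bundle prices, and it asks whether any nonnegative valuation vector is consistent with the bidder's past round choices. The budget-constrained case would add further variables and constraints.

```python
# Hypothetical illustration (not the thesis's exact formulation): check whether
# some nonnegative bundle valuation v is consistent with a bidder's past bids
# under revealed preference, by solving a linear feasibility problem.
import numpy as np
from scipy.optimize import linprog

# Toy history: in each round the bidder faced per-bundle prices and bid on one bundle.
bundles = ["A", "B", "AB"]
history = [
    {"prices": {"A": 3.0, "B": 2.0, "AB": 6.0}, "chosen": "A"},
    {"prices": {"A": 5.0, "B": 2.5, "AB": 7.0}, "chosen": "B"},
]

idx = {b: i for i, b in enumerate(bundles)}
A_ub, b_ub = [], []
for round_ in history:
    p, chosen = round_["prices"], round_["chosen"]
    for other in bundles:
        if other == chosen:
            continue
        # Straightforward bidding: v[chosen] - p[chosen] >= v[other] - p[other],
        # rewritten as v[other] - v[chosen] <= p[other] - p[chosen].
        row = np.zeros(len(bundles))
        row[idx[other]], row[idx[chosen]] = 1.0, -1.0
        A_ub.append(row)
        b_ub.append(p[other] - p[chosen])

# Feasibility check only: a zero objective suffices, with v >= 0.
res = linprog(c=np.zeros(len(bundles)), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * len(bundles), method="highs")
print("bid history passes the rule" if res.status == 0 else "activity-rule violation")
```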

    An Optimistic-Robust Approach for Dynamic Positioning of Omnichannel Inventories

    We introduce a new data-driven and distribution-free optimistic-robust bimodal inventory optimization (BIO) strategy to effectively allocate inventory across a retail chain to meet time-varying, uncertain omnichannel demand. While prior robust optimization (RO) methods emphasize the downside, i.e., worst-case adversarial demand, BIO also considers the upside: it remains resilient like RO while reaping the rewards of improved average-case performance by overcoming the presence of endogenous outliers. This bimodal strategy is particularly valuable for balancing the tradeoff between lost sales at the store and the costs of cross-channel e-commerce fulfillment, which is at the core of our inventory optimization model. These factors are asymmetric due to the heterogeneous behavior of the channels, with a bias toward the former in terms of lost-sales cost and a dependence on network effects for the latter. We provide structural insights about the BIO solution and how it can be tuned to achieve a preferred tradeoff between robustness and average-case performance. Our experiments show that significant benefits can be achieved by rethinking traditional approaches to inventory management, which are siloed by channel and location. Using a real-world dataset from a large American omnichannel retail chain, a business value assessment during a peak period indicates over a 15% profitability gain for BIO over RO and other baselines, while also preserving the (practical) worst-case performance.
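
    To make the bimodal idea concrete, here is a minimal toy sketch (my own illustration, not the paper's BIO model): a single-location inventory choice whose objective blends average-case cost over inlier demand samples with worst-case cost over an assumed demand interval, with a weight lam acting as the robustness knob. The cost parameters and demand data are invented.

```python
# Toy bimodal objective: blend worst-case cost over a demand interval with
# average-case cost over historical samples inside that interval (so the
# outlier 30 does not distort the average), then pick the best order quantity.
import numpy as np

holding_cost, lost_sale_cost = 1.0, 4.0     # assumed asymmetric costs
demand_samples = np.array([8, 10, 12, 30])  # history with an endogenous outlier (30)
demand_lo, demand_hi = 8, 14                # assumed uncertainty interval

def cost(q, d):
    return holding_cost * max(q - d, 0) + lost_sale_cost * max(d - q, 0)

def bimodal_objective(q, lam=0.5):
    avg = np.mean([cost(q, d) for d in demand_samples if demand_lo <= d <= demand_hi])
    # The newsvendor cost is convex in demand, so the worst case on an interval
    # is attained at an endpoint.
    worst = max(cost(q, demand_lo), cost(q, demand_hi))
    return lam * worst + (1 - lam) * avg

best_q = min(range(0, 41), key=bimodal_objective)
print("chosen inventory:", best_q)
```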

    Hierarchy-guided Model Selection for Time Series Forecasting

    The generalizability of time series forecasting models depends on the quality of model selection. Temporal cross validation (TCV) is a standard technique for model selection in forecasting tasks: it sequentially partitions the training time series into train and validation windows and performs hyperparameter optimization (HPO) of the forecast model to select the model with the best validation performance. Model selection with TCV often leads to poor test performance when the test data distribution differs from that of the validation data. We propose a novel model selection method, H-Pro, that exploits the data hierarchy often associated with a time series dataset. Generally, the aggregated data at the higher levels of the hierarchy show better predictability and more consistency than the bottom-level data, which are sparser and (sometimes) intermittent. H-Pro performs HPO of the lowest-level student model based on the test proxy forecasts obtained from a set of teacher models at higher levels in the hierarchy. The consistency of the teachers' proxy forecasts helps select better student models at the lowest level. We perform extensive empirical studies on multiple datasets to validate the efficacy of the proposed method: H-Pro, used with off-the-shelf forecasting models, outperforms existing state-of-the-art forecasting methods, including the winning models of the M5 point-forecasting competition.
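
    A minimal sketch of the selection idea follows (an assumed simplification, not the authors' implementation): the hyperparameter of a toy bottom-level model is chosen so that the sum of its per-series forecasts best matches a higher-level teacher's proxy forecast over the test horizon. The teacher forecast and data here are invented.

```python
# Toy hierarchy-guided selection: score each candidate hyperparameter by how well
# the aggregated bottom-level forecasts match a top-level teacher proxy forecast.
import numpy as np

rng = np.random.default_rng(0)
n_series, horizon = 5, 4
teacher_forecast = np.array([50.0, 52.0, 55.0, 53.0])   # total-level proxy forecast (assumed given)

def student_forecast(alpha, series_histories, horizon):
    # Toy bottom-level model: exponential-smoothing level carried forward.
    out = []
    for y in series_histories:
        level = y[0]
        for v in y[1:]:
            level = alpha * v + (1 - alpha) * level
        out.append(np.full(horizon, level))
    return np.array(out)

histories = [rng.poisson(10, size=12).astype(float) for _ in range(n_series)]

def proxy_loss(alpha):
    bottom = student_forecast(alpha, histories, horizon)
    return np.mean((bottom.sum(axis=0) - teacher_forecast) ** 2)  # aggregate vs. teacher

best_alpha = min(np.linspace(0.05, 0.95, 19), key=proxy_loss)
print("selected smoothing parameter:", round(float(best_alpha), 2))
```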

    Integrating transcriptomic and proteomic data for accurate assembly and annotation of genomes

    © 2017 Wong et al.; published by Cold Spring Harbor Laboratory Press. Complementing genome sequence with deep transcriptome and proteome data could enable more accurate assembly and annotation of newly sequenced genomes. Here, we provide a proof-of-concept of an integrated approach for analysis of the genome and proteome of Anopheles stephensi, which is one of the most important vectors of the malaria parasite. To achieve broad coverage of genes, we carried out transcriptome sequencing and deep proteome profiling of multiple anatomically distinct sites. Based on transcriptomic data alone, we identified and corrected 535 events of incomplete genome assembly involving 1196 scaffolds and 868 protein-coding gene models. This proteogenomic approach enabled us to add 365 genes that were missed during genome annotation and to identify 917 gene correction events through discovery of 151 novel exons, 297 protein extensions, 231 exon extensions, 192 novel protein start sites, 19 novel translational frames, 28 events of joining of exons, and 76 events of joining of adjacent genes as a single gene. Incorporation of proteomic evidence allowed us to change the designation of more than 87 predicted noncoding RNAs to conventional mRNAs coded by protein-coding genes. Importantly, extension of the newly corrected genome assemblies and gene models to 15 other newly assembled Anopheline genomes led to the discovery of a large number of apparent discrepancies in assembly and annotation of these genomes. Our data provide a framework for how future genome sequencing efforts should incorporate transcriptomic and proteomic analysis in combination with simultaneous manual curation to achieve near-complete assembly and accurate annotation of genomes.

    Abstract

    We present a quantum auction protocol that uses superpositions to represent bids and distributed search to identify the winner(s). Measuring the final quantum state gives the auction outcome while simultaneously destroying the superposition, so non-winning bids are never revealed. Participants can use entanglement to arrange correlations among their bids, with the assurance that this entanglement is not observable by others. The protocol is useful for information-hiding applications, such as partnership bidding with allocative externality or concerns about revealing bidding preferences. The protocol applies to a variety of auction types, e.g., first or second price, and to auctions involving either a single item or arbitrary bundles of items (i.e., combinatorial auctions). We analyze the game-theoretic behavior of the quantum protocol for the simple case of a sealed-bid quantum auction, and show how a suitably designed adiabatic search reduces the possibilities for bidders to game the auction. This design illustrates how incentive rather than computational constraints affect quantum algorithm choices.
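
    As a rough intuition aid only, the toy classical simulation below mimics the outcome-revelation property described in the abstract: each bidder prepares normalized amplitudes over discrete bid levels, a joint bid profile is sampled with Born-rule probabilities, and only the winner and price are reported, so the losing bid is never printed. This is not the quantum protocol itself; the first-price rule, bid levels, and amplitudes are all assumptions.

```python
# Toy classical sketch of the measurement step: sample a joint bid profile from
# Born-rule probabilities and reveal only the auction outcome, not the losing bid.
import numpy as np

rng = np.random.default_rng(1)
bid_levels = np.array([1, 2, 3, 4])

# Each bidder prepares amplitudes over bid levels (normalized like a quantum state).
amp_1 = np.array([0.1, 0.2, 0.8, 0.56])
amp_2 = np.array([0.5, 0.7, 0.4, 0.3])
amp_1, amp_2 = amp_1 / np.linalg.norm(amp_1), amp_2 / np.linalg.norm(amp_2)

# Joint "state" over bid profiles; probabilities follow |amplitude|^2.
joint_prob = np.outer(amp_1**2, amp_2**2).ravel()
profiles = [(b1, b2) for b1 in bid_levels for b2 in bid_levels]

# "Measure": sample one profile, then reveal only the winner and price (first-price rule).
b1, b2 = profiles[rng.choice(len(profiles), p=joint_prob)]
winner, price = (1, b1) if b1 >= b2 else (2, b2)
print(f"winner: bidder {winner}, price: {price}")   # the losing bid stays hidden
```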

    Nevus comedonicus syndrome


    End-to-End Learning for Optimization via Constraint-Enforcing Approximators

    In many real-world applications, predictive methods are used to provide inputs for downstream optimization problems. It has been shown that learning the intermediate predictive model with the downstream task-based objective is often better than using only intermediate objectives, such as prediction error; the former approach is referred to as end-to-end learning. The difficulty in end-to-end learning lies in differentiating through the optimization problem. We therefore propose a neural network architecture that learns to approximately solve these optimization problems, in particular ensuring that its output satisfies the feasibility constraints via alternating projections, and we show these projections converge at a geometric rate to the exact projection. Our approach is more computationally efficient than existing methods because we do not need to solve the original optimization problem at each iteration, and it can be applied to a wider range of optimization problems. We apply it to a shortest-path problem in which the first-stage forecasting problem is a computer vision task of predicting edge costs from terrain maps, to a capacitated multi-product newsvendor problem, and to a maximum matching problem. We show that this method outperforms existing approaches in terms of final task-based loss and training time.
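
    The feasibility-enforcement idea can be illustrated with a small alternating-projection sketch (my own toy example, not the paper's architecture): a candidate output is repeatedly projected onto a box and onto a hyperplane, so the iterates approach a point satisfying both constraints.

```python
# Toy alternating projections: enforce 0 <= x <= 1 and sum(x) = 1 on a raw output.
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

def project_hyperplane(x, target=1.0):
    # Euclidean projection onto {x : sum(x) = target}.
    return x + (target - x.sum()) / x.size

def enforce_feasibility(x, iters=50):
    # For convex sets with nonempty intersection, alternating projections converge;
    # under regularity conditions the rate is geometric.
    for _ in range(iters):
        x = project_hyperplane(project_box(x))
    return x

raw_output = np.array([1.7, -0.3, 0.4])   # e.g., an unconstrained network output
print(enforce_feasibility(raw_output))
```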